48 research outputs found

    Image-based quantitative analysis of gold immunochromatographic strip via cellular neural network approach

    Get PDF
    "(c) 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works."Gold immunochromatographic strip assay provides a rapid, simple, single-copy and on-site way to detect the presence or absence of the target analyte. This paper aims to develop a method for accurately segmenting the test line and control line of the gold immunochromatographic strip (GICS) image for quantitatively determining the trace concentrations in the specimen, which can lead to more functional information than the traditional qualitative or semi-quantitative strip assay. The canny operator as well as the mathematical morphology method is used to detect and extract the GICS reading-window. Then, the test line and control line of the GICS reading-window are segmented by the cellular neural network (CNN) algorithm, where the template parameters of the CNN are designed by the switching particle swarm optimization (SPSO) algorithm for improving the performance of the CNN. It is shown that the SPSO-based CNN offers a robust method for accurately segmenting the test and control lines, and therefore serves as a novel image methodology for the interpretation of GICS. Furthermore, quantitative comparison is carried out among four algorithms in terms of the peak signal-to-noise ratio. It is concluded that the proposed CNN algorithm gives higher accuracy and the CNN is capable of parallelism and analog very-large-scale integration implementation within a remarkably efficient time

    Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation

    Full text link
    Diffusion models have demonstrated remarkable performance in image generation tasks, paving the way for powerful AIGC applications. However, these widely used generative models also raise security and privacy concerns, such as copyright infringement and sensitive data leakage. To tackle these issues, we propose a method, Unlearnable Diffusion Perturbation, to safeguard images from unauthorized exploitation. Our approach involves designing an algorithm to generate sample-wise perturbation noise for each image to be protected. This imperceptible protective noise makes the data almost unlearnable for diffusion models, i.e., diffusion models trained or fine-tuned on the protected data cannot generate high-quality, diverse images related to the protected training data. Theoretically, we frame this as a max-min optimization problem and introduce EUDP, a noise-scheduler-based method that enhances the effectiveness of the protective noise. We evaluate our method on both the Denoising Diffusion Probabilistic Model and Latent Diffusion Models, demonstrating that training diffusion models on the protected data leads to a significant reduction in the quality of the generated images. In particular, the experimental results on Stable Diffusion demonstrate that our method effectively safeguards images from being used to train diffusion models on various tasks, such as learning specific objects and styles. This achievement holds significant importance in real-world scenarios, as it contributes to the protection of privacy and copyright against AI-generated content.
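
    The max-min formulation can be sketched as a PGD-style loop over the protective noise. The sketch below assumes a DDPM-style noise predictor `model(x_t, t)` and a tensor `alphas_bar` of cumulative noise-schedule products; it compresses the alternating optimization into a single error-minimizing update on the perturbation, and all names and hyperparameters are illustrative rather than the paper's.

        import torch

        def protective_noise(model, x, alphas_bar, eps=8 / 255,
                             steps=50, lr=1 / 255):
            """Sample-wise perturbation that suppresses the denoising loss."""
            delta = torch.zeros_like(x, requires_grad=True)
            for _ in range(steps):
                t = torch.randint(0, len(alphas_bar), (x.size(0),),
                                  device=x.device)
                noise = torch.randn_like(x)
                a = alphas_bar[t].view(-1, 1, 1, 1)
                x_t = a.sqrt() * (x + delta) + (1 - a).sqrt() * noise
                loss = torch.nn.functional.mse_loss(model(x_t, t), noise)
                loss.backward()
                with torch.no_grad():
                    # error-minimizing step: drive the training loss on the
                    # protected image toward zero so it carries little signal
                    delta -= lr * delta.grad.sign()
                    delta.clamp_(-eps, eps)  # keep the noise imperceptible
                    delta.grad = None
            return delta.detach()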

    Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged Training

    Full text link
    Unlearning techniques have been proposed to prevent third parties from exploiting unauthorized data; they generate unlearnable samples by adding imperceptible perturbations to data before public release. These unlearnable samples effectively misguide model training toward learning perturbation features while ignoring the images' semantic features. Through an in-depth analysis, we observe that models can learn both the image features and the perturbation features of unlearnable samples at an early stage, but then rapidly enter an overfitting stage, because the shallow layers tend to overfit on the perturbation features. Based on these observations, we propose Progressive Staged Training to effectively prevent models from overfitting on perturbation features. We evaluate our method on multiple model architectures over diverse datasets, e.g., CIFAR-10, CIFAR-100, and ImageNet-mini. Our method circumvents the unlearnability of all state-of-the-art methods in the literature and provides a reliable baseline for further evaluation of unlearnable techniques.
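
    One plausible reading of the staged schedule is a training loop that freezes the shallow layers after an early stage, so they can no longer overfit on perturbation features. The sketch below is an illustrative interpretation under that assumption; the stage boundaries, layer-name prefixes, and optimizer settings are assumptions, not the paper's schedule.

        import torch

        def staged_training(model, stages, train_one_epoch):
            """stages: list of (num_epochs, prefixes_of_layers_to_freeze)."""
            for num_epochs, frozen_prefixes in stages:
                for name, param in model.named_parameters():
                    param.requires_grad = not any(
                        name.startswith(p) for p in frozen_prefixes)
                opt = torch.optim.SGD(
                    [p for p in model.parameters() if p.requires_grad],
                    lr=0.01, momentum=0.9)
                for _ in range(num_epochs):
                    train_one_epoch(model, opt)

        # e.g. train everything briefly, then freeze the shallow layers:
        # staged_training(model, [(5, []), (45, ["conv1", "layer1"])],
        #                 train_one_epoch)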

    ANPL: Compiling Natural Programs with Interactive Decomposition

    Full text link
    The advent of Large Language Models (LLMs) has shown promise for augmenting programming through natural interactions. However, while LLMs are proficient at compiling common usage patterns into a programming language such as Python, it remains a challenge to edit and debug an LLM-generated program. We introduce ANPL, a programming system that allows users to decompose user-specific tasks. In an ANPL program, a user directly manipulates the sketch, which specifies the data flow of the generated program, and annotates the modules, or holes, with natural-language descriptions, offloading the expensive task of generating their functionality to the LLM. Given an ANPL program, the ANPL compiler generates a cohesive Python program that implements the functionality of the holes while respecting the dataflow specified in the sketch. We deploy ANPL on the Abstraction and Reasoning Corpus (ARC), a set of unique tasks that are challenging for state-of-the-art AI systems, showing that it outperforms baseline programming systems that (a) lack the ability to decompose tasks interactively and (b) lack the guarantee that the modules can be correctly composed together. We obtain a dataset consisting of 300/400 ARC tasks that were successfully decomposed and grounded in Python, providing valuable insights into how humans decompose programmatic tasks. See the dataset at https://iprc-dip.github.io/DARC.
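
    The sketch-and-holes structure can be modeled in a few lines. The toy below assumes a hypothetical `llm_complete(prompt) -> str` helper; ANPL's actual compiler is considerably more involved, and this only illustrates how hole generation is offloaded to the LLM while the user's dataflow is kept intact.

        from dataclasses import dataclass

        @dataclass
        class Hole:
            name: str         # function name referenced in the sketch
            description: str  # natural-language spec supplied by the user

        def compile_sketch(sketch: str, holes: list[Hole],
                           llm_complete) -> str:
            """Fill each hole with LLM-generated Python; keep the sketch."""
            bodies = []
            for hole in holes:
                prompt = (f"Write a Python function named {hole.name} "
                          f"that {hole.description}. Return only code.")
                bodies.append(llm_complete(prompt))
            # the sketch (user-specified dataflow) is appended unchanged
            return "\n\n".join(bodies) + "\n\n" + sketch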

    Synchronization of stochastic genetic oscillator networks with time delays and Markovian jumping parameters

    Get PDF
    Genetic oscillator networks (GONs) are inherently coupled complex systems in which the nodes represent biochemicals and the couplings represent biochemical interactions. This paper is concerned with the synchronization problem for a general class of stochastic GONs with time delays and Markovian jumping parameters, where the GONs are subject to both stochastic disturbances and Markovian parameter switching. The regulatory functions of the addressed GONs are described by sector-like nonlinear functions. By applying the up-to-date 'delay-fractioning' approach for achieving delay-dependent conditions, we construct a novel matrix functional to derive synchronization criteria for the GONs, formulated in terms of linear matrix inequalities (LMIs), which are easily solvable with the Matlab LMI toolbox. A simulation example demonstrates the synchronization phenomena within biological organisms for a given GON and thereby shows the applicability of the obtained results. This work was supported in part by the Biotechnology and Biological Sciences Research Council (BBSRC) of the UK under Grants BB/C506264/1 and 100/EGM17735, the Royal Society of the UK, the National Natural Science Foundation of China under Grant 60804028, the Teaching and Research Fund for Excellent Young Teachers at Southeast University of China, the International Science and Technology Cooperation Project of China under Grant 2009DFA32050, and the Alexander von Humboldt Foundation of Germany.
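
    Since the criteria are LMIs, any semidefinite-programming solver can check them. As a minimal stand-in for the Matlab LMI toolbox, the sketch below tests the classical Lyapunov LMI A^T P + P A < 0, P > 0 on a toy stable matrix with cvxpy; the paper's actual synchronization LMIs are larger, but feasibility is checked the same way.

        import cvxpy as cp
        import numpy as np

        A = np.array([[-2.0, 1.0],
                      [0.0, -3.0]])       # toy stable system matrix
        P = cp.Variable((2, 2), symmetric=True)
        margin = 1e-6 * np.eye(2)
        constraints = [P >> margin,                 # P positive definite
                       A.T @ P + P @ A << -margin]  # Lyapunov LMI
        prob = cp.Problem(cp.Minimize(0), constraints)
        prob.solve()
        print("LMI feasible:", prob.status == cp.OPTIMAL)  # expect: True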